# Multi-task Training

**Fio Base Japanese V0.1** (bclavie)

The first version of the Fio series of Japanese embedding models, based on the BERT architecture and focused on Japanese text similarity and feature extraction tasks.

Text Embedding · Transformers · Japanese · 79 downloads · 7 likes

**ICD-10 Sentence Transformer 128-Dim Model** (Atgenomix)

A BioBERT-based sentence embedding model trained on multiple NLI datasets, suited to sentence similarity calculation and semantic search tasks.

Text Embedding · Transformers · 1,292 downloads · 0 likes

**STT Fa FastConformer Hybrid Large** (nvidia)

A hybrid model for Persian automatic speech recognition (ASR) that combines transducer and CTC decoder losses and is built on the FastConformer architecture.

Speech Recognition · Other · 2,398 downloads · 9 likes

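A checkpoint like this is typically loaded through NVIDIA's NeMo toolkit. The sketch below is illustrative only; the Hub identifier `nvidia/stt_fa_fastconformer_hybrid_large` and the audio file name are assumptions, not taken from this listing.

```python
# Minimal sketch, assuming NeMo is installed (pip install "nemo_toolkit[asr]")
# and that the checkpoint is published as "nvidia/stt_fa_fastconformer_hybrid_large" (assumed ID).
import nemo.collections.asr as nemo_asr

# Load the pretrained hybrid (transducer + CTC) FastConformer model.
asr_model = nemo_asr.models.ASRModel.from_pretrained(
    model_name="nvidia/stt_fa_fastconformer_hybrid_large"
)

# Transcribe a local Persian audio file (placeholder path).
transcriptions = asr_model.transcribe(["sample_fa.wav"])
print(transcriptions)
```
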
**GLuCoSE Base Ja** (pkshatech)

GLuCoSE is a Japanese text embedding model based on LUKE, suited to sentence similarity and semantic search tasks.

Text Embedding · Transformers · Japanese · Apache-2.0 license · 70.71k downloads · 32 likes

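Japanese embedding models such as Fio and GLuCoSE follow the usual sentence-embedding workflow: encode sentences to vectors and compare them. A minimal sketch with the sentence-transformers library, assuming GLuCoSE is published on the Hugging Face Hub as `pkshatech/GLuCoSE-base-ja` (assumed ID):

```python
# Minimal sketch, assuming the sentence-transformers library and the assumed
# Hub ID "pkshatech/GLuCoSE-base-ja".
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("pkshatech/GLuCoSE-base-ja")

# Two Japanese sentences with similar meaning ("the weather is nice today" / "it is sunny today").
sentences = ["今日は天気が良いです。", "本日は晴天です。"]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings.
print(float(util.cos_sim(embeddings[0], embeddings[1])))
```
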
**SapBERT MNLI SNLI SciNLI SciTail MedNLI STSB** (pritamdeka)

A sentence-transformers model that maps sentences and paragraphs into a 768-dimensional dense vector space, suited to tasks such as clustering or semantic search.

Text Embedding · Transformers · 37 downloads · 1 like

**PubMedBERT MNLI SNLI SciNLI SciTail MedNLI STSB** (pritamdeka)

A PubMedBERT-based sentence-transformers model that produces 768-dimensional vector representations of sentences and paragraphs, suited to semantic search and clustering tasks.

Text Embedding · Transformers · 213 downloads · 7 likes

**BioBERT MNLI SNLI SciNLI SciTail MedNLI STSB** (pritamdeka)

A sentence-transformers model that maps sentences and paragraphs into a 768-dimensional dense vector space, suited to tasks such as clustering or semantic search.

Text Embedding · Transformers · 53.20k downloads · 43 likes

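The three pritamdeka models above expose the same interface: 768-dimensional sentence vectors that can be indexed for semantic search. A minimal sketch, again with sentence-transformers; the Hub ID `pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb`, the corpus, and the query are illustrative assumptions:

```python
# Minimal semantic-search sketch, assuming sentence-transformers and the assumed
# Hub ID "pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb".
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb")

# Tiny illustrative corpus of biomedical statements.
corpus = [
    "Aspirin reduces the risk of heart attack.",
    "Metformin is commonly prescribed for type 2 diabetes.",
    "The mitochondrion is the powerhouse of the cell.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)  # 768-dim vectors

# Embed a query and retrieve the closest corpus entries by cosine similarity.
query_embedding = model.encode("drug used to treat diabetes", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 3))
```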